44 research outputs found

    A probabilistic analysis of human influence on recent record global mean temperature changes

    December 2013 was the 346th consecutive month where global land and ocean average surface temperature exceeded the 20th century monthly average, with February 1985 the last time mean temperature fell below this value. Even given these and other extraordinary statistics, public acceptance of human-induced climate change and confidence in the supporting science has declined since 2007. The degree of uncertainty as to whether observed climate changes are due to human activity or are part of natural systems fluctuations remains a major stumbling block to effective adaptation action and risk management. Previous approaches to attributing change include qualitative expert-assessment approaches such as those used in IPCC reports and 'fingerprinting' methods based on global climate models. Here we develop an alternative approach which provides a rigorous probabilistic statistical assessment of the link between observed climate changes and human activities, in a way that can inform formal climate risk assessment. We construct and validate a time series model of anomalous global temperatures to June 2010, using rates of greenhouse gas (GHG) emissions as well as other causal factors, including solar radiation, volcanic forcing and the El Niño Southern Oscillation. When the effect of GHGs is removed, bootstrap simulation of the model reveals that there is less than a one in one hundred thousand chance of observing an unbroken sequence of 304 months (our analysis extends to June 2010) with mean surface temperature exceeding the 20th century average. We also show that one would expect a far greater number of short periods of falling global temperatures (as observed since 1998) if climate change were not occurring. This approach to assessing probabilities of human influence on global temperature could be transferred to other climate variables and extremes, allowing enhanced formal risk assessment of climate change
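The run-length probability quoted above can be illustrated with a simple bootstrap. The sketch below is not the authors' fitted time series model: the residual distribution, sample size, and the `longest_run_above` and `run_probability` helpers are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def longest_run_above(series, threshold=0.0):
    """Length of the longest unbroken run of values above `threshold`."""
    best = current = 0
    for x in series:
        current = current + 1 if x > threshold else 0
        best = max(best, current)
    return best

def run_probability(residuals, n_months=304, n_boot=10_000):
    """Bootstrap estimate of the chance that all `n_months` resampled
    anomalies sit above the baseline, i.e. an unbroken warm run."""
    hits = sum(
        longest_run_above(rng.choice(residuals, size=n_months)) >= n_months
        for _ in range(n_boot)
    )
    return hits / n_boot

# Zero-centred toy residuals stand in for the GHG-removed model noise.
residuals = rng.normal(0.0, 0.15, size=600)
p = run_probability(residuals)  # effectively zero for symmetric noise
```

With symmetric noise the probability of 304 consecutive above-baseline months is vanishingly small, which is the intuition behind the one-in-one-hundred-thousand figure.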

    Housing and Health in Ghana: The Psychosocial Impacts of Renting a Home

    This paper reports the findings of a qualitative study investigating the impacts of renting a home on the psychosocial health of tenants in the Accra Metropolitan Area (AMA) in Ghana. In-depth interviews (n = 33) were conducted with private renters in Adabraka, Accra. The findings show that private renters in the AMA face serious problems in finding appropriate and affordable rental units, as well as a persistent threat of eviction by homeowners. These challenges tend to predispose renters to psychosocial distress and diminishing ontological security. Findings are relevant to a range of pluralistic policy options that emphasize both formal and informal housing provision, together with the reorganization and decentralization of the Rent Control Board to the district level to facilitate easy access by the citizenry

    High-Resolution Melting Genotyping of Enterococcus faecium Based on Multilocus Sequence Typing Derived Single Nucleotide Polymorphisms

    We have developed a single nucleotide polymorphism (SNP) nucleated high-resolution melting (HRM) technique to genotype Enterococcus faecium. Eight SNPs were derived from the E. faecium multilocus sequence typing (MLST) database and amplified fragments containing these SNPs were interrogated by HRM. We tested the HRM genotyping scheme on 85 E. faecium bloodstream isolates and compared the results with MLST, pulsed-field gel electrophoresis (PFGE) and an allele specific real-time PCR (AS kinetic PCR) SNP typing method. In silico analysis based on predicted HRM curves according to the G+C content of each fragment for all 567 sequence types (STs) in the MLST database together with empiric data from the 85 isolates demonstrated that HRM analysis resolves E. faecium into 231 “melting types” (MelTs) and provides a Simpson's Index of Diversity (D) of 0.991 with respect to MLST. This is a significant improvement on the AS kinetic PCR SNP typing scheme that resolves 61 SNP types with D of 0.95. The MelTs were concordant with the known ST of the isolates. For the 85 isolates, there were 13 PFGE patterns, 17 STs, 14 MelTs and eight SNP types. There was excellent concordance between PFGE, MLST and MelTs with Adjusted Rand Indices of PFGE to MelT 0.936 and ST to MelT 0.973. In conclusion, this HRM based method appears rapid and reproducible. The results are concordant with MLST and the MLST based population structure
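Simpson's Index of Diversity cited above (D = 0.991 for HRM versus 0.95 for SNP typing) has a short closed form. A minimal sketch, with hypothetical single-letter labels standing in for real melting types:

```python
from collections import Counter

def simpsons_diversity(labels):
    """Simpson's Index of Diversity: the probability that two isolates
    drawn at random without replacement belong to different types,
    D = 1 - sum(n_i * (n_i - 1)) / (N * (N - 1))."""
    counts = Counter(labels)
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Hypothetical results: ten isolates resolved into five equal-sized types.
d = simpsons_diversity(list("AABBCCDDEE"))
```

A scheme that resolves every isolate into its own type gives D = 1; fewer, larger type groups pull D down, which is why the 231 melting types outperform the 61 SNP types.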

    The pig X and Y Chromosomes: structure, sequence, and evolution.

    We have generated an improved assembly and gene annotation of the pig X Chromosome, and a first draft assembly of the pig Y Chromosome, by sequencing BAC and fosmid clones from Duroc animals and incorporating information from optical mapping and fiber-FISH. The X Chromosome carries 1033 annotated genes, 690 of which are protein coding. Gene order closely matches that found in primates (including humans) and carnivores (including cats and dogs), which is inferred to be ancestral. Nevertheless, several protein-coding genes present on the human X Chromosome were absent from the pig, and 38 pig-specific X-chromosomal genes were annotated, 22 of which were olfactory receptors. The pig Y-specific Chromosome sequence generated here comprises 30 megabases (Mb). A 15-Mb subset of this sequence was assembled, revealing two clusters of male-specific low copy number genes, separated by an ampliconic region including the HSFY gene family, which together make up most of the short arm. Both clusters contain palindromes with high sequence identity, presumably maintained by gene conversion. Many of the ancestral X-related genes previously reported in at least one mammalian Y Chromosome are represented either as active genes or partial sequences. This sequencing project has allowed us to identify genes – both single copy and amplified – on the pig Y Chromosome, to compare the pig X and Y Chromosomes for homologous sequences, and thereby to reveal mechanisms underlying pig X and Y Chromosome evolution. This work was funded by BBSRC grant BB/F021372/1. The Flow Cytometry and Cytogenetics Core Facilities at the Wellcome Trust Sanger Institute and Sanger investigators are funded by the Wellcome Trust (grant number WT098051). K.B., D.C.-S., and J.H. acknowledge support from the Wellcome Trust (WT095908), the BBSRC (BB/I025506/1), and the European Molecular Biology Laboratory.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 222664 (“Quantomics”). This is the final version of the article. It first appeared from Cold Spring Harbor Laboratory Press via http://dx.doi.org/10.1101/gr.188839.11

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead

    Therapeutic Equivalence of Lansoprazole and Esomeprazole

    No full text

    Synoptic to large-scale drivers of minimum temperature variability in Australia – long-term changes

    No full text
    This study documents the importance, and changes in the importance, of a suite of synoptic to large-scale drivers of minimum temperature variability across the Australian region. The drivers investigated are the El Niño – Southern Oscillation (ENSO) as measured by the Southern Oscillation Index, atmospheric blocking, the Southern Annular Mode, and the position and intensity of the subtropical ridge. In most regions, individual drivers generally account for between about 5 and 10% of the interannual variability across Australia, although in some seasons and regions these drivers can collectively account for more than 60% of the observed variability. The amount of minimum temperature variance explained by individual drivers is highest in south-eastern Australia in summer (December–February), where the drivers collectively account for 67% of the variance, due primarily to the relationships with blocking and ENSO. The varying importance of the drivers of minimum temperature variability between seasons and between two discrete periods (i.e. 1960–1984 and 1985–2015) has been investigated. In the more recent period the intensity of the subtropical ridge has played a more important role in minimum temperature variability, particularly in the south-western and south-eastern parts of Australia in summer (December–February), with the position of the subtropical ridge a feature of greater importance over much of Victoria in spring (September–November). 
For the more recent period the intensity of the subtropical ridge and the Southern Annular Mode have been more important drivers of minimum temperature variability in autumn (March–May) and winter (June–August), respectively, across southern New South Wales and northern Victoria. The authors acknowledge the Australian Bureau of Meteorology (BoM) for provision of its Australian Climate Observations Reference Network – Surface Air Temperature (ACORN–SAT) data and the Queensland Department of Science, Information Technology and Innovation (DSITIA) for provision of its SILO gridded minimum temperature data for analysis. The authors also thank the Earth Systems and Climate Change Hub of the National Environmental Science Program (NESP) for their ongoing support and acknowledge that this research was made possible via financial support from the Australian Grains Research and Development Corporation (GRDC), under the National Frost Initiative (NFI)
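The "variance explained" figures above are the R² of regressions of temperature on the driver indices. A hedged sketch of that calculation on synthetic data — the driver series, coefficients, and `variance_explained` helper here are invented, not the study's data or method in detail:

```python
import numpy as np

rng = np.random.default_rng(1)

def variance_explained(y, drivers):
    """R^2 of an ordinary-least-squares fit of y on the driver indices:
    the fraction of interannual variance the drivers jointly account for."""
    X = np.column_stack([np.ones(len(y)), *drivers])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / np.var(y)

# Synthetic 50-year example: two invented driver indices plus noise.
soi = rng.normal(size=50)       # stand-in for the Southern Oscillation Index
blocking = rng.normal(size=50)  # stand-in for a blocking index
temps = 0.6 * soi + 0.4 * blocking + rng.normal(scale=0.5, size=50)
r2 = variance_explained(temps, [soi, blocking])
```

Fitting drivers one at a time gives the individual 5–10% figures; fitting them jointly, as here, gives the collective percentages such as the 67% reported for south-eastern Australia in summer.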

    Bayesian space-time model to analyse frost risk for agriculture in Southeast Australia

    No full text
    Despite a broad pattern of warming in minimum temperatures over the past 50 years, regions of southeastern Australia have experienced increases in frost frequency in recent decades, and more broadly across southern Australia, an extension of the frost window due to an earlier onset and later cessation. Consistent across southern Australia is a later cessation of frosts, with some areas of southeastern Australia experiencing the last frost an average of 4 weeks later than in the 1960s (i.e. the mean date of last frost for the period 1960–1970 was 19 September versus 22 October for the period 2000–2009). We seek to model the spatial changes in frosts for a region exhibiting the strongest individual station trends, i.e. northern Victoria and southern New South Wales. We identify statistically significant trends at low-lying stations for the month of August and construct and validate a Bayesian space–time model of minimum temperatures, using rates of greenhouse gas (GHG) emissions, as well as other well-understood causal factors including solar radiation, the El Niño Southern Oscillation (ENSO 3.4) and time series data relating to the position (STRP) and intensity (STRI) of subtropical highs and blocking high pressure systems. We assess the performance of this modelling approach against observational records, as well as against additive and linear regression modelling approaches, using root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE), together with false alarm and hit rate metrics. The spatiotemporal modelling approach demonstrated considerably better predictive skill than the others, with enhanced performance across all the metrics analysed. This enhanced performance was consistent across each decade and for temperature extremes below 2 °C
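The verification metrics named above have standard definitions. A small sketch using the 2 °C frost threshold from the abstract; the observed and predicted minima are invented numbers, and `verification_metrics` is an illustrative helper, not the study's code:

```python
import numpy as np

def verification_metrics(observed, predicted, frost_threshold=2.0):
    """RMSE, MAE, MAPE (per cent) plus frost hit rate and false alarm
    ratio, where a 'frost' is a minimum temperature below the threshold."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    err = pred - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    mape = float(np.mean(np.abs(err / obs))) * 100.0  # obs must be non-zero
    obs_frost = obs < frost_threshold
    pred_frost = pred < frost_threshold
    hits = int(np.sum(obs_frost & pred_frost))
    misses = int(np.sum(obs_frost & ~pred_frost))
    false_alarms = int(np.sum(~obs_frost & pred_frost))
    hit_rate = hits / (hits + misses) if hits + misses else 0.0
    far = false_alarms / (false_alarms + hits) if false_alarms + hits else 0.0
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape,
            "hit_rate": hit_rate, "false_alarm_ratio": far}

# Invented minima (°C): one observed frost night that the forecast misses.
m = verification_metrics([1.0, 3.0, 5.0, 4.0], [2.0, 3.0, 4.0, 4.0])
```

The continuous metrics score the temperature fit, while hit rate and false alarm ratio score the model as a categorical frost/no-frost forecast, which is what matters for the below-2 °C extremes.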